OpenAI is tightening access to its most advanced AI models. According to a newly published support page, the company may soon require organizations to complete an identity verification process to unlock certain future models through its API.
The process, called Verified Organization, aims to strengthen accountability and curb misuse. To qualify, organizations will need to submit a government-issued ID from a country supported by OpenAI’s API. Notably, each ID can verify only one organization every 90 days, and not all applicants will be eligible.
OpenAI cites ongoing misuse of its tools by a “small minority of developers” who violate the platform’s usage policies.
“We take our responsibility seriously to ensure that AI is both broadly accessible and used safely,” the company wrote. “The verification process is designed to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”
While it’s unclear which future models will fall under this restriction, the move signals a shift toward stricter governance and access controls as OpenAI’s models grow more powerful and potentially riskier.
Developers and businesses planning to build on upcoming models will likely need to clear a pre-approval process, which may involve delays and eligibility checks. The upside? This could help protect the ecosystem from bad actors while ensuring legitimate users continue to benefit from cutting-edge AI tools.
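For teams preparing for this change, one defensive pattern is to catch the permission error a gated model would return and fall back to a model the organization can already access. The sketch below uses the official OpenAI Python SDK; the assumption that gated models surface as a PermissionDeniedError (HTTP 403) for unverified organizations, and the model names shown, are illustrative rather than confirmed by OpenAI.

```python
# Minimal sketch of graceful degradation for gated models.
# Assumptions (not confirmed by OpenAI): unverified organizations get a
# PermissionDeniedError (HTTP 403) when requesting a gated model, and
# "future-model" is a hypothetical placeholder for such a model.
from openai import OpenAI, PermissionDeniedError

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, model: str = "future-model", fallback: str = "gpt-4o") -> str:
    """Try the gated model first; fall back if the org isn't verified."""
    try:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
    except PermissionDeniedError:
        # Likely cause under the new policy: the organization has not
        # completed Verified Organization for this model tier.
        response = client.chat.completions.create(
            model=fallback,
            messages=[{"role": "user", "content": prompt}],
        )
    return response.choices[0].message.content


print(ask("Hello!"))
```

Centralizing the fallback in a single helper like this means the rest of a codebase needs no changes if and when the organization completes verification.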
OpenAI has yet to announce when this requirement will go into full effect or which specific models will be gated. But with safety, compliance, and model misuse under increasing scrutiny, this step appears to be part of a broader trend toward AI platform accountability.